GAMP algorithm
Optimal Spectral Transitions in High-Dimensional Multi-Index Models
Defilippis, Leonardo, Dandi, Yatin, Mergny, Pierre, Krzakala, Florent, Loureiro, Bruno
We consider the problem of how many samples from a Gaussian multi-index model are required to weakly reconstruct the relevant index subspace. Despite its increasing popularity as a testbed for investigating the computational complexity of neural networks, results beyond the single-index setting remain elusive. In this work, we introduce spectral algorithms based on the linearization of a message passing scheme tailored to this problem. Our main contribution is to show that the proposed methods achieve the optimal reconstruction threshold. Leveraging a high-dimensional characterization of the algorithms, we show that above the critical threshold the leading eigenvector correlates with the relevant index subspace, a phenomenon reminiscent of the Baik-Ben Arous-Péché (BBP) transition in spiked models arising in random matrix theory. Supported by numerical experiments and a rigorous theoretical framework, our work bridges critical gaps in the computational limits of weak learnability in multi-index models.
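The spectral mechanism the abstract describes can be illustrated in the simplest single-index case: form a matrix of label-weighted outer products of the data and check that its leading eigenvector picks up the hidden direction. This is a minimal numpy sketch, not the paper's construction; the phase-retrieval-like link and the centered preprocessing T(y) = y - 1 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 4000
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                 # hidden index direction
X = rng.standard_normal((n, d))        # Gaussian covariates
y = (X @ w) ** 2                       # single-index link (phase-retrieval-like)

# Spectral matrix: outer products of the data weighted by preprocessed labels
T = y - 1.0                            # centered preprocessing (illustrative)
M = (X.T * T) @ X / n
eigvals, eigvecs = np.linalg.eigh(M)
v = eigvecs[:, -1]                     # leading eigenvector
overlap = abs(v @ w)                   # correlation with the hidden direction
```

With this many samples per dimension the spike sits well above the bulk, so `overlap` is close to 1; shrinking n/d toward the critical threshold would make the eigenvector decorrelate from w, which is the BBP-type transition the abstract refers to.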
Approximate Message Passing with Spectral Initialization for Generalized Linear Models
Mondelli, Marco, Venkataramanan, Ramji
We consider the problem of estimating a signal from measurements obtained via a generalized linear model. We focus on estimators based on approximate message passing (AMP), a family of iterative algorithms with many appealing features: the performance of AMP in the high-dimensional limit can be succinctly characterized under suitable model assumptions; AMP can also be tailored to the empirical distribution of the signal entries, and for a wide class of estimation problems, AMP is conjectured to be optimal among all polynomial-time algorithms. However, a major issue of AMP is that in many models (such as phase retrieval), it requires an initialization correlated with the ground-truth signal and independent of the measurement matrix. Assuming that such an initialization is available is typically not realistic. In this paper, we solve this problem by proposing an AMP algorithm initialized with a spectral estimator. With such an initialization, the standard AMP analysis fails since the spectral estimator depends in a complicated way on the design matrix. Our main technical contribution is the construction and analysis of a two-phase artificial AMP algorithm that first produces the spectral estimator, and then closely approximates the iterates of the true AMP. Our analysis yields a rigorous characterization of the performance of AMP with spectral initialization in the high-dimensional limit. We also provide numerical results that demonstrate the validity of the proposed approach.
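The basic shape of an AMP iteration can be sketched in the simpler setting of noiseless sparse linear regression with a soft-threshold denoiser (not the GLM setting of the paper): a matched-filter step, a componentwise denoiser, and the Onsager correction to the residual. The threshold policy below (1.5 times the estimated noise level) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 1000, 50
A = rng.standard_normal((n, d)) / np.sqrt(n)   # columns roughly unit norm
x0 = np.zeros(d)
x0[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                     # noiseless measurements

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(d), y.copy()
for _ in range(50):
    tau = 1.5 * np.linalg.norm(z) / np.sqrt(n)  # threshold from residual energy
    x_new = soft(x + A.T @ z, tau)              # matched filter + denoiser
    # Onsager correction: divergence of the soft threshold = fraction of nonzeros
    z = y - A @ x_new + z * (np.count_nonzero(x_new) / n)
    x = x_new
err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

Starting from `x = 0` works here because the denoiser alone can bootstrap a sparse signal; in models like phase retrieval this cold start fails, which is exactly the gap the spectral initialization of the paper is meant to close.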
High-dimensional macroeconomic forecasting using message passing algorithms
As a response to the increasing linkages between the macroeconomy and the financial sector, as well as the expanding interconnectedness of the global economy, empirical macroeconomic models have increased both in complexity and size. For that reason, estimation of modern models that inform macroeconomic decisions - such as linear and nonlinear versions of dynamic stochastic general equilibrium (DSGE) and vector autoregressive (VAR) models - often relies on Bayesian inference via powerful Markov chain Monte Carlo (MCMC) methods. However, existing posterior simulation algorithms cannot scale up to very high dimensions due to the computational inefficiency and the larger numerical error associated with repeated sampling via Monte Carlo; see Angelino et al. (2016) for a thorough review of such computational issues from a machine learning and high-dimensional data perspective. In that respect, while Bayesian inference is a natural probabilistic framework for learning about parameters by utilizing all information in the data likelihood and prior, computational restrictions might make it less suitable for supporting real-time decision-making in very high dimensions. This paper introduces to the econometric literature the framework of factor graphs (Kschischang et al., 2001) for the purpose of designing computationally efficient and easy-to-maintain Bayesian estimation algorithms. The focus is not only on "faster" posterior inference broadly interpreted, but on designing algorithms whose complexity is so low that they are future-proof and can be used in high-dimensional econometric problems with possibly thousands or millions of coefficients.
A Compressed Sensing Approach for Distribution Matching
Dia, Mohamad, Aref, Vahid, Schmalen, Laurent
In this work, we formulate fixed-length distribution matching as a Bayesian inference problem. Our proposed solution is inspired by the compressed sensing paradigm and sparse superposition (SS) codes. First, we introduce sparsity in the binary source via position modulation (PM). We then present a simple and exact matcher based on Gaussian signal quantization. At the receiver, the dematcher exploits the sparsity in the source and performs low-complexity dematching based on generalized approximate message-passing (GAMP). We show that the GAMP dematcher and spatial coupling lead to asymptotically optimal performance, in the sense that the rate tends to the entropy of the target distribution with vanishing reconstruction error in a proper limit. Furthermore, we assess the performance of the dematcher on practical Hadamard-based operators. A remarkable feature of our proposed solution is the possibility of: i) performing matching at the symbol level (nonbinary); and ii) performing joint channel coding and matching.
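The position-modulation idea of mapping bits to sparse positions can be sketched as follows: each group of log2(B) source bits selects one position in a section of length B, so the resulting vector has exactly one nonzero per section (sparsity 1/B). The section size B and the helper names are illustrative, not the paper's notation.

```python
import numpy as np

def pm_encode(bits, B=4):
    """Map each group of log2(B) bits to a one-hot section of length B."""
    b = int(np.log2(B))
    assert len(bits) % b == 0
    x = np.zeros(len(bits) // b * B)
    for i in range(0, len(bits), b):
        pos = int("".join(map(str, bits[i:i + b])), 2)
        x[(i // b) * B + pos] = 1.0
    return x

def pm_decode(x, B=4):
    """Invert pm_encode: read off the nonzero position in each section."""
    b = int(np.log2(B))
    bits = []
    for s in range(len(x) // B):
        pos = int(np.argmax(x[s * B:(s + 1) * B]))
        bits.extend(int(c) for c in format(pos, f"0{b}b"))
    return bits

x = pm_encode([1, 0, 1, 1, 0, 0])   # three sections, one nonzero each
```

This is the sparsifying front end; in the scheme described above, the dematcher then recovers the sparse vector from compressed measurements via GAMP rather than reading it off directly.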
A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm
Al-Shoukairi, Maher, Schniter, Philip, Rao, Bhaskar D.
In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL). In particular, GGAMP is used to implement the E-step in SBL in place of matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to an arbitrary measurement matrix A than the standard damped GAMP algorithm, while having much lower complexity than the standard SBL algorithm. We then extend the approach from the single measurement vector (SMV) case to the temporally correlated multiple measurement vector (MMV) case, leading to the GGAMP-TSBL algorithm. We verify the robustness and computational advantages of the proposed algorithms through numerical experiments. The problem of sparse signal recovery (SSR) and the related problem of compressed sensing have received much attention in recent years [1]-[6]. Despite the difficulty in solving this problem [7], an important finding in recent years is that for a sufficiently sparse x and a well designed A, accurate recovery is possible by techniques such as basis pursuit and orthogonal matching pursuit [8]-[10]. The SSR problem has seen considerable advances on the algorithmic front, including iteratively reweighted algorithms [11]-[13] and Bayesian techniques [14]-[20], among others. Two Bayesian techniques related to this work are the generalized approximate message passing (GAMP) and sparse Bayesian learning (SBL) algorithms.
Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian Prior Models
Yang, Linxiao, Fang, Jun, Duan, Huiping, Li, Hongbin, Zeng, Bing
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inversion operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
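Why a common precision matrix across columns encourages low rank can be seen directly: if the shared covariance (the inverse precision) has only a few non-negligible eigenvalues, every column is confined to the same low-dimensional subspace. A small numpy illustration, with dimensions and eigenvalue gap chosen for clarity rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 30, 40, 3

# Shared covariance with r dominant directions; its inverse plays the role
# of the common precision matrix of the column prior.
U = np.linalg.qr(rng.standard_normal((m, m)))[0]
evals = np.concatenate([np.full(r, 10.0), np.full(m - r, 1e-4)])
cov = (U * evals) @ U.T                # = U diag(evals) U^T

# Columns drawn i.i.d. from N(0, cov) concentrate in an r-dim subspace,
# so the resulting m x n matrix is numerically rank r.
cols = rng.multivariate_normal(np.zeros(m), cov, size=n).T
s = np.linalg.svd(cols, compute_uv=False)
```

In the paper's model the precision matrix is not fixed but learned under a Wishart hyperprior; inference then has the freedom to inflate most precision eigenvalues, which is what pushes the completed matrix toward low rank.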
Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning
Kamilov, Ulugbek, Rangan, Sundeep, Unser, Michael, Fletcher, Alyson K.
We consider the estimation of an i.i.d. vector $\mathbf{x} \in \mathbb{R}^n$ from measurements $\mathbf{y} \in \mathbb{R}^m$ obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector $\mathbf{x}$. The proposed algorithm is a generalization of a recently developed method by Vila and Schniter that uses expectation-maximization (EM) iterations where the posteriors in the E-steps are computed via approximate message passing. The techniques can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of complex linear-nonlinear models with provable guarantees.
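A scalar state evolution of the kind the abstract mentions tracks a single effective-noise variance across iterations. A minimal sketch for the classical AMP recursion on a sparse linear model (a Bernoulli-Gaussian prior and soft-threshold denoiser, illustrative choices rather than the paper's adaptive setting), with the expectation estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
delta, rho, alpha, sigma2 = 0.5, 0.1, 1.5, 0.0  # n/d, sparsity, threshold, noise

# Monte Carlo samples of the scalar prior (Bernoulli-Gaussian) and noise
N = 200_000
X = (rng.random(N) < rho) * rng.standard_normal(N)
Z = rng.standard_normal(N)

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# State evolution: tau^2_{t+1} = sigma^2 + E[(eta(X + tau Z) - X)^2] / delta
tau2 = sigma2 + np.mean(X**2) / delta            # estimate-zero initialization
for _ in range(30):
    tau = np.sqrt(tau2)
    mse = np.mean((soft(X + tau * Z, alpha * tau) - X) ** 2)
    tau2 = sigma2 + mse / delta
```

In this noiseless configuration the recursion contracts to (numerically) zero, predicting exact recovery; the proof technique in the paper extends this kind of scalar prediction to the case where the prior and channel parameters are learned on the fly.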